Bandits with Knapsacks


Ashwinkumar Badanidiyuru, Cornell

Monday, January 27, 2014
4:00pm, 310 Gates Hall

Abstract:

Multi-armed bandit problems are the predominant theoretical model of exploration-exploitation tradeoffs in machine learning, and they have countless applications ranging from medical trials, to communication networks, to Web search and advertising, to dynamic pricing. In many of these application domains the learner may be constrained by one or more supply (or budget) limits, in addition to the customary limitation on the time horizon. The literature lacks a general model encompassing these sorts of problems. We introduce such a model, called “bandits with knapsacks,” that combines aspects of stochastic integer programming with online learning. A distinctive feature of our problem, in comparison to the existing regret-minimization literature, is that the optimal policy for a given latent distribution may significantly outperform the policy that plays the optimal fixed arm. Consequently, achieving sublinear regret in the bandits-with-knapsacks problem is significantly more challenging than in conventional bandit problems.
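
To make this distinctive feature concrete, consider a toy instance with assumed numbers (illustrative only, not taken from the talk or the paper): over T = 100 rounds with a budget of B = 50 units of a single resource, an arm that pays 1 per pull but consumes a unit of budget can be played at most 50 times, while a budget-free arm that pays 0.4 is worse on its own; yet spending the budget on the first arm and then switching to the second beats both fixed arms. A minimal sketch of the arithmetic in Python:

    # Illustrative toy instance (assumed numbers): T rounds, one resource with budget B.
    T, B = 100, 50

    # Arm A pays 1.0 per pull but consumes 1 unit of budget; arm B pays 0.4 and consumes nothing.
    reward_fixed_A = min(T, B) * 1.0          # plays until the budget runs out: 50.0
    reward_fixed_B = T * 0.4                  # never touches the budget: 40.0

    # Mixed policy: spend the budget on A, then switch to B for the remaining rounds.
    rounds_on_A = min(T, B)
    reward_mixed = rounds_on_A * 1.0 + (T - rounds_on_A) * 0.4   # 50 + 20 = 70

    print(reward_fixed_A, reward_fixed_B, reward_mixed)          # 50.0 40.0 70.0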

We present two algorithms whose reward is close to the information-theoretic optimum: one is based on a novel “balanced exploration” paradigm, while the other is a primal-dual algorithm that uses multiplicative updates. Further, we prove that the regret achieved by both algorithms is optimal up to polylogarithmic factors.
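
As a rough illustration of the primal-dual flavor, the sketch below keeps multiplicative weights over the resources, favors the arm with the best ratio of an optimistic reward estimate to a weighted consumption estimate, and upweights resources that are being drained quickly. This is not the algorithm from the paper: the toy instance, the confidence radii, and the step size are all assumptions made for the sketch.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy instance (assumed): K arms, d resources, Bernoulli rewards/consumptions,
    # a common budget B per resource, and a time horizon T.
    K, d, T, B = 3, 2, 5000, 1000
    true_reward = np.array([0.7, 0.5, 0.3])          # mean reward of each arm
    true_cost = np.array([[0.9, 0.1],                # mean per-round consumption
                          [0.3, 0.3],                # of each resource, per arm
                          [0.1, 0.8]])

    eps = np.sqrt(np.log(d) / B)      # step size for the multiplicative update (heuristic)
    weights = np.ones(d)              # dual-style weights over the resources
    pulls = np.zeros(K)
    reward_sum = np.zeros(K)
    cost_sum = np.zeros((K, d))
    budget_left = np.full(d, float(B))
    total_reward = 0.0

    for t in range(1, T + 1):
        if np.any(budget_left < 1.0):                # stop once a resource is exhausted
            break
        n = np.maximum(pulls, 1)
        conf = np.sqrt(2.0 * np.log(T) / n)
        # Optimistic (upper) estimate of reward, pessimistic (lower) estimate of cost.
        ucb_reward = np.where(pulls > 0, reward_sum / n + conf, 1.0)
        lcb_cost = np.where(pulls[:, None] > 0,
                            np.maximum(cost_sum / n[:, None] - conf[:, None], 1e-6),
                            1e-6)
        # Pick the arm with the best ratio of optimistic reward to weighted cost.
        weighted_cost = lcb_cost @ (weights / weights.sum())
        arm = int(np.argmax(ucb_reward / weighted_cost))
        # Observe stochastic reward and consumption, pay the budgets.
        r = float(rng.random() < true_reward[arm])
        c = (rng.random(d) < true_cost[arm]).astype(float)
        pulls[arm] += 1
        reward_sum[arm] += r
        cost_sum[arm] += c
        budget_left -= c
        total_reward += r
        # Multiplicative update: resources consumed quickly become "expensive".
        weights *= (1.0 + eps) ** c

    print("rounds played:", t, "total reward:", total_reward)

In this sketch the weights act like rough prices on the resources, so the ratio rule trades estimated reward against estimated supply cost instead of simply chasing the arm with the highest reward.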

Joint work with Bobby Kleinberg and Alex Slivkins; the paper appeared at FOCS 2013.